To meet the high-reliability, low-latency, and large-volume data transmission requirements of Multi-access Edge Computing (MEC) servers, a Media Access Control (MAC) scheduling strategy based on conflict-free access, a priority architecture, and elastic service technology was proposed for vehicle edge computing scenarios. In the proposed strategy, the Road Side Unit (RSU) of the Internet of Vehicles (IoV) centrally coordinated channel access rights and prioritized the link transmission quality between the On Board Units (OBUs) and the MEC server in the vehicle network, so that Vehicle-to-Network (V2N) service data could be transmitted in a timely manner. At the same time, an elastic service approach was adopted for services between local OBUs to enhance the reliability of emergency message transmission under dense vehicle access. First, a queuing analysis model was constructed for the scheduling strategy. Then, embedded Markov chains were established according to the memoryless (Markov) property of the system state variables at each moment, and the system was analyzed theoretically by the method of probability generating functions to obtain exact analytical expressions for key indicators such as the average queue length, the average waiting latency of MEC server communication units and OBUs, and the RSU query period. Computer simulation results show that the statistical analysis results are consistent with the theoretical calculations, and that the proposed scheduling strategy can improve the stability and flexibility of the IoV under high load.
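The priority structure of such a strategy can be illustrated with a toy slotted simulation. This is an illustration of head-of-line priority with elastic low-priority service only, not the paper's embedded-Markov-chain model; the arrival rates and slot structure are arbitrary assumptions.

```python
import random

def simulate(slots=20000, lam_hi=0.3, lam_lo=0.3, seed=3):
    # Toy slotted model: the high-priority (MEC/V2N) queue is always served
    # first; the local-OBU queue receives "elastic" service only when the
    # priority queue is empty. Bernoulli arrivals, one service per slot.
    rng = random.Random(seed)
    q_hi = q_lo = 0
    tot_hi = tot_lo = 0
    for _ in range(slots):
        q_hi += rng.random() < lam_hi
        q_lo += rng.random() < lam_lo
        if q_hi:
            q_hi -= 1
        elif q_lo:
            q_lo -= 1
        tot_hi += q_hi
        tot_lo += q_lo
    return tot_hi / slots, tot_lo / slots   # time-average queue lengths

avg_hi, avg_lo = simulate()
```

As expected under priority service, the high-priority queue stays shorter than the elastically served local queue at the same load.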
Two optimization methods for a quantum simulator implemented on the Sunway supercomputer were proposed, aiming at the problems of the continuing growth of quantum hardware scale and the insufficient speed of classical simulation. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition and computation strategies, which improved the computing kernel efficiency of partial tensor contractions and reduced redundant memory access. Secondly, a balance between path computation complexity and efficiency was achieved by a contraction path adjustment method based on data locality optimization. Test results show that the operator library improvement raises the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times, and that the path adjustment method improves the floating-point efficiency by about 4 times while only doubling the path computational complexity. Together, the two optimization methods improve the single-precision and mixed-precision floating-point efficiencies for simulating the random circuit of Google's 53-qubit, 20-layer quantum chip with one million amplitude samples from 3.98% and 1.69% to 18.48% and 7.42% respectively, and reduce the theoretically estimated simulation time from 470 s to 226 s for single precision and from 304 s to 134 s for mixed precision, verifying that the two methods significantly improve quantum computational simulation speed.
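The path-dependence of contraction cost can be seen with NumPy's `einsum_path`, which exposes the same order-of-contraction trade-off the path adjustment method tunes. This is an illustration of the general phenomenon only, not of the SWTT operator library; the tensor shapes are arbitrary.

```python
import numpy as np

# The cost of contracting a tensor network depends strongly on the
# contraction order; einsum_path searches for a cheap order.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 64))
b = rng.standard_normal((64, 64))
c = rng.standard_normal((64, 8))

# Ask NumPy's greedy path optimizer for a contraction order.
path, report = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')

# The optimized contraction gives the same result as naive evaluation.
result = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)
```

The `report` string includes the FLOP counts of the naive and optimized orders, which is exactly the complexity-versus-efficiency balance the abstract describes.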
The optimization process of the Remora Optimization Algorithm (ROA) includes three modes: attaching to the host, empirical attack, and host foraging, and the exploration and exploitation abilities of this algorithm are relatively strong. However, because the original algorithm switches hosts through the empirical attack, it balances exploration and exploitation poorly, converges slowly, and easily falls into local optima. Aiming at these problems, a Modified ROA (MROA) based on a chaotic host switching mechanism was proposed. Firstly, a new host switching mechanism was designed to better balance exploration and exploitation. Then, in order to diversify the initial hosts of the remora, Tent chaotic mapping was introduced for population initialization to further improve the performance of the algorithm. Finally, MROA was compared with six algorithms, including the original ROA and the Reptile Search Algorithm (RSA), on the CEC2020 test functions. The analysis of the experimental results shows that the best fitness value, average fitness value, and standard deviation of fitness values obtained by MROA are better than those obtained by ROA, RSA, the Whale Optimization Algorithm (WOA), the Harris Hawks Optimization (HHO) algorithm, the Sperm Swarm Optimization (SSO) algorithm, the Sine Cosine Algorithm (SCA), and the Sooty Tern Optimization Algorithm (STOA) by 28%, 33%, and 12% on average, respectively. The test results on CEC2020 show that MROA has good optimization ability, convergence ability, and robustness. At the same time, the effectiveness of MROA on engineering problems was further verified by solving the design problems of the welded beam and the multi-plate clutch brake.
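The Tent-map population initialization mentioned above can be sketched as follows. This is a minimal sketch of the chaotic initialization idea using the standard Tent map; the exact map parameterization and bounds handling in MROA may differ.

```python
import random

def tent_init(pop_size, dim, lb, ub, seed=1):
    # Tent chaotic map: x_{k+1} = 2x if x < 0.5 else 2(1 - x).
    # Chaotic sequences cover [0, 1] more evenly than plain uniform draws,
    # diversifying the initial population.
    random.seed(seed)
    pop = []
    x = random.random()
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            # re-seed if the orbit hits a fixed point (0 or 1) and collapses
            if x in (0.0, 1.0):
                x = random.random()
            ind.append(lb + x * (ub - lb))   # map chaos value into [lb, ub]
        pop.append(ind)
    return pop

pop = tent_init(30, 5, -10.0, 10.0)
```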
In Hematoxylin-Eosin (HE)-stained pathological images, the uneven distribution of cell staining and the diversity of tissue morphologies pose great challenges to automated segmentation. Traditional convolutions cannot capture the correlations between pixels across a large neighborhood, making it difficult to further improve segmentation performance. Therefore, a Multi-Channel Segmentation Network with gated axial self-attention (MCSegNet) model was proposed to achieve accurate segmentation of nuclei in pathological images. The proposed model adopted a dual-encoder and decoder structure, in which the axial self-attention encoding channel was used to capture global features, while the convolutional encoding channel based on a residual structure was used to obtain local fine-grained features. The feature representation was enhanced by feature fusion at the end of the encoding channels, providing a good information basis for the decoder. In the decoder, segmentation results were gradually generated by cascading multiple upsampling modules. In addition, an improved hybrid loss function was used to effectively alleviate the common problem of sample imbalance in pathological images. Experimental results on the MoNuSeg2020 public dataset show that the improved segmentation method is 2.66 percentage points and 2.77 percentage points higher than U-Net in terms of the F1-score and Intersection over Union (IoU) indicators, respectively, effectively improving the pathological image segmentation effect and the reliability of clinical diagnosis.
To solve the problems of mutual interference among multi-channel ElectroEncephaloGraphy (EEG) signals, different classification results caused by individual differences, and the low recognition rate of single-domain features, a method of channel selection and feature fusion was proposed. Firstly, the acquired EEG signals were preprocessed, and the important channels were selected by using Gradient Boosting Decision Tree (GBDT). Secondly, the Generalized Predictive Control (GPC) model was used to construct prediction signals of the important channels and distinguish the subtle differences among multi-dimensional correlated signals, and then the SE-TCNTA (Squeeze and Excitation block-Temporal Convolutional Network-Temporal Attention) model was used to extract temporal features between different frames. Thirdly, the Pearson correlation coefficient was used to calculate the relationships between channels; the frequency-domain features of EEG and the control values of the prediction signals were extracted as inputs, the spatial graph structure was established, and a Graph Convolutional Network (GCN) was used to extract features in the frequency and spatial domains. Finally, the above two kinds of features were input into the fully connected layer for feature fusion to realize EEG classification. Experimental results on the public dataset BCICIV_2a show that, in the case of channel selection, compared with the EEG-Inception model originally proposed for ERP detection and the DSCNN (Shallow Double-branch Convolutional Neural Network) model that also uses double-branch feature extraction, the proposed method increases the classification accuracy by 1.47% and 1.69% respectively, and the Kappa value by 1.25% and 2.53% respectively. The proposed method can improve the classification accuracy of EEG and reduce the influence of redundant data on feature extraction, so it is more suitable for Brain-Computer Interface (BCI) systems.
Aiming at the problem of resource auction mechanism design in the cloud environment, a more General multi-unit FAlse-name-proof auction mechanism for vIrTual macHine allocation (GFAITH) was studied and designed. First, the system model was formally defined. Then, around the design goals of truthfulness and false-name-proofness, it was proved that when the diversity of user demands is considered, a new form of cheating, Demand-Reduction (DR) cheating, emerges, which can destroy the truthful and false-name-proof properties; the experimental results show that it seriously affects system performance. On this basis, GFAITH was proposed to achieve the design goals in three stages: user pre-processing, pre-allocation and pricing, and resisting demand-reduction cheating. It is theoretically proved that the resource allocation of GFAITH is feasible and false-name-proof. Experimental results show that GFAITH can effectively guarantee system performance on indicators such as revenue and social welfare, verifying the effectiveness and efficiency of the proposed mechanism.
Aiming at the problem that current image defect detection models have poor detection performance on tail categories in long-tailed defect datasets, a GGW-DND Loss (Gradient-Guide Weighted-Deferred Negative Gradient decay Loss) was proposed. First, the positive and negative gradients were re-weighted according to the cumulative gradient ratio of the classification nodes in the detector, in order to alleviate the suppression of the tail-class classifiers. Then, once the model was optimized to a certain stage, the negative gradient generated by each node was sharply reduced to enhance the generalization ability of the tail-class classifiers. Experimental results on a self-made image defect dataset and NEU-DET (NEU surface defect database for Defect dEtection Task) show that the mean Average Precision (mAP) for tail categories of the proposed loss is better than that of Binary Cross Entropy Loss (BCE Loss) by 32.02 and 7.40 percentage points respectively, and better than that of EQL v2 (EQualization Loss v2) by 2.20 and 0.82 percentage points respectively, verifying that the proposed loss can effectively improve the detection performance of the network for tail categories.
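The gradient-ratio re-weighting idea can be sketched as a weighted sigmoid cross-entropy. This is a hedged illustration only: the per-class cumulative gradient totals `g_pos`/`g_neg` and the min-based weighting function are assumptions for exposition, not the exact functions used by GGW-DND Loss.

```python
import numpy as np

def gradient_guided_bce(logits, labels, g_pos, g_neg, eps=1e-8):
    # Sketch: classes whose cumulative negative gradient dominates their
    # positive gradient (ratio < 1, typical of tail classes) get their
    # negative-sample term down-weighted, easing classifier suppression.
    p = 1.0 / (1.0 + np.exp(-logits))
    ratio = g_pos / (g_neg + eps)          # per-class gradient balance
    w_neg = np.minimum(1.0, ratio)         # damp negatives when ratio < 1
    loss = -(labels * np.log(p + eps)
             + w_neg * (1.0 - labels) * np.log(1.0 - p + eps))
    return loss.mean()

logits = np.array([[2.0, -1.0]])
labels = np.array([[1.0, 0.0]])
# hypothetical accumulated gradient statistics for two classes
tail_weighted = gradient_guided_bce(logits, labels,
                                    g_pos=np.array([1.0, 0.2]),
                                    g_neg=np.array([0.5, 1.0]))
```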
With the rapid development of the Internet of Things (IoT), a large amount of data generated in edge scenarios such as sensors often needs to be transmitted to cloud nodes for processing, which brings huge transmission cost and processing delay. Cloud-edge collaboration provides a solution to these problems. Firstly, on the basis of a comprehensive investigation and analysis of the development process of cloud-edge collaboration, and combined with current research ideas and progress in intelligent cloud-edge collaboration, the data acquisition and analysis, computation offloading, and model-based intelligent optimization technologies in cloud-edge architectures were analyzed and discussed with emphasis. Secondly, the functions and applications of various technologies in intelligent cloud-edge collaboration were analyzed in depth from the edge side and the cloud side respectively, and real-world application scenarios of intelligent cloud-edge collaboration technology were discussed. Finally, the current challenges and future development directions of intelligent cloud-edge collaboration were pointed out.
Aiming at the lack of consideration of decision makers' psychological behaviors in software quality evaluation methods, a TOmada de Decisão Interativa e Multicritério (TODIM) software quality evaluation method based on interval 2-tuple linguistic information was proposed. Firstly, interval 2-tuple linguistic information was used to characterize the experts' evaluation information for software quality. Secondly, the subjective and objective weights of software quality attributes were calculated by a subjective weighting method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) respectively; on this basis, the comprehensive weights of software quality attributes were obtained by a combined weighting method. Thirdly, in order to better describe the experts' psychological behaviors in the process of software quality evaluation, TODIM was introduced into software quality evaluation. Finally, the method was used to evaluate the software quality of the assistant dispatcher terminal in a high-speed railway dispatching system. The result shows that the third assistant dispatcher terminal software provided by the railway software supplier has the highest dominance value, and its quality is the best. The results of comparing this method with regret theory and the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE-II) show that the three methods are consistent in selecting the best-quality software, but their overall rankings differ somewhat, indicating that the constructed method has clear advantages in describing the interaction between multiple criteria and the psychological behaviors of decision makers.
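The dominance computation at the core of TODIM can be sketched in its classical crisp form. This is a sketch of the behavioral mechanism only (gains and losses weighted asymmetrically via the attenuation factor θ); the paper's method operates on interval 2-tuple linguistic values rather than the crisp scores assumed here, and the example data are hypothetical.

```python
import math

def todim_dominance(X, w, theta=1.0):
    # Classical TODIM: dominance of alternative i over j sums per-criterion
    # contributions, with losses attenuated by theta (prospect-theory style).
    m, n = len(X), len(X[0])
    wr = [wi / max(w) for wi in w]       # weights relative to the largest
    sw = sum(wr)
    delta = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            for c in range(n):
                d = X[i][c] - X[j][c]
                if d > 0:                # perceived gain
                    delta[i][j] += math.sqrt(wr[c] * d / sw)
                elif d < 0:              # perceived loss, attenuated by theta
                    delta[i][j] -= math.sqrt(sw * (-d) / wr[c]) / theta
    return delta

# Hypothetical scores of three alternatives on two criteria.
X = [[0.9, 0.8], [0.5, 0.5], [0.1, 0.2]]
w = [0.6, 0.4]
scores = [sum(row) for row in todim_dominance(X, w)]
best = scores.index(max(scores))   # alternative with the highest dominance
```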
As a semantic knowledge base, Knowledge Graph (KG) uses structured triples to store real-world entities and their internal relationships. To infer the missing real triples in a knowledge graph, and considering the strong triple representation ability of the relational memory network and the powerful feature processing ability of the capsule network, a knowledge graph embedding model of capsule network based on relational memory was proposed. First, the embedding vectors were formed by encoding the latent dependencies between entities and relations together with some important information. Then, the embedding vectors were convolved with filters to generate different feature maps, which were recombined into the corresponding capsules. Finally, the connections from the parent capsules to the child capsules were specified through the squashing function and dynamic routing, and the confidence of the current triple was estimated by the inner product score between the child capsule and the weight vector. Link prediction experimental results show that, compared with the CapsE model, the proposed model improves the Mean Reciprocal Rank (MRR) and Hit@10 evaluation indicators by 7.95% and 2.2 percentage points respectively on the WN18RR dataset, and by 3.82% and 2 percentage points respectively on the FB15K-237 dataset. Experimental results show that the proposed model can more accurately infer the relationship between head and tail entities.
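The squashing non-linearity used between capsule layers can be sketched as follows. This is the standard capsule-network form; the dynamic routing loop itself is omitted for brevity.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Capsule-network squashing: preserves the vector's direction and
    # compresses its length into [0, 1) so the length can serve as a
    # confidence score.
    sq = np.sum(v * v, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

long_capsule = squash(np.array([3.0, 4.0]))    # length 5 -> just under 1
short_capsule = squash(np.array([0.01, 0.0]))  # tiny length stays tiny
```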
With the development of the smart power grid, Unmanned Aerial Vehicles (UAVs) are more and more widely used for the inspection of transmission lines. To effectively perform fault location and type judgment on transmission lines, UAVs are required to transmit videos and images with high resolution. Under the condition of limited bandwidth, the spectral efficiency of the UAV backhaul communication link must be improved as much as possible to meet the transmission rate requirements of high-resolution videos and images. A video and image transmission communication method based on a Mesh network was proposed. By deploying wireless access nodes on towers and building a Mesh network, the communication devices carried by UAVs could join the built Mesh network as network nodes at any time. After a video of a fault on the transmission lines was captured, it could be quickly transmitted to the data center by the UAV. For this purpose, the communication module of the patrol UAV was equipped with a large-scale antenna array, and in the millimeter-wave frequency band, a heuristic point-to-point directional hybrid beamforming method was adopted to improve the spectral efficiency of the receiving communication link. The simulation results show that the performance of the proposed method is better than that of the Orthogonal Matching Pursuit (OMP) method and is closer to that of the fully digital beamforming method.
Concerning the problems that Cell type-Specificity (CS) and the similarity and difference information between different cell types are not properly used when predicting Differential Gene Expression (DGE) with large-scale Histone Modification (HM) data, as well as the large volume of input and high computational cost, a deep learning-based method named dcsDiff was proposed. Firstly, multiple AutoEncoders (AEs) and Bi-directional Long Short-Term Memory (Bi-LSTM) networks were introduced to reduce the dimensionality of HM signals and model them to obtain the embedded representation. Then, multiple Convolutional Neural Networks (CNNs) were used to mine the combined effects of HMs in each single cell type, as well as the similarity and difference information of each HM and the joint effects of all HMs between two cell types. Finally, the two kinds of information were fused to predict the DGE between the two cell types. In comparison experiments with DeepDiff on 10 pairs of cell types in the REMC (Roadmap Epigenomics Mapping Consortium) database, dcsDiff increased the Pearson Correlation Coefficient (PCC) of DGE prediction by 7.2% at most and 3.9% on average, increased the number of accurately detected differentially expressed genes by 36 at most and 17.6 on average, and reduced the running time by 78.7%. The validity of the reasonable integration of the above two kinds of information was proved in component analysis experiments, and the parameters of dcsDiff were determined by experiments. Experimental results show that the proposed dcsDiff can effectively improve the efficiency of DGE prediction.
In commercial digital cameras, due to the limitation of Complementary Metal Oxide Semiconductor (CMOS) sensors, each pixel in the sampled image carries only one color channel of information. Therefore, a Color image DeMosaicking (CDM) algorithm is required to restore full-color images. However, most existing Convolutional Neural Network (CNN)-based CDM algorithms cannot achieve satisfactory performance with relatively low computational complexity and a small number of network parameters. To solve this problem, a CDM network based on Inter-channel Correlation and Enhanced Information Distillation (ICEID) was proposed. Firstly, to fully utilize the inter-channel correlation of the color image, an inter-channel guided reconstruction structure was designed to obtain the initial CDM result. Secondly, an Enhanced Information Distillation Module (EIDM), which can effectively extract and refine features with a relatively small number of parameters, was presented to enhance the reconstructed full-color image with high efficiency. Experimental results demonstrate that, compared with many state-of-the-art CDM methods, the proposed algorithm achieves significant improvement in both objective and subjective quality, with relatively low computational complexity and a small number of network parameters.
Aiming at the sharp increase of data on the cloud caused by the development and popularization of cloud-native technology, as well as the performance and stability bottlenecks of the technology, a Haystack-based storage system was proposed. With optimizations in service discovery, automatic fault tolerance, and caching mechanism, the system is more suitable for cloud-native business and meets the growing, high-frequency file storage and read/write requirements of the data acquisition, storage, and analysis industries. The object storage model used by the system supports massive file storage with high-frequency reads and writes. A simple and unified application interface is provided for business systems using the storage system, a file caching strategy is applied to improve resource utilization, and the rich automated toolchain of Kubernetes is adopted to make this storage system easier to deploy, easier to expand, and more stable than other storage systems. Experimental results indicate that the proposed storage system achieves a certain improvement in performance and stability compared with current mainstream object storage and file systems for large-scale fragmented data storage with more reads than writes.
To improve the accuracy and robustness of image edge detection, a new Canny edge detection algorithm based on Robust Principal Component Analysis (RPCA) was proposed. The image was decomposed into a low-rank principal component and a sparse component by RPCA, and then the edge information of the principal component was extracted by the Canny operator. The proposed algorithm formulated image edge detection as edge detection on the principal component of the image, which eliminated the interference of image "stains" on the detection results and suppressed noise. The experimental results show that the proposed algorithm outperforms the LoG, Canny, and SUSAN edge detection algorithms in terms of both accuracy and robustness.
The traditional graph-based recommendation algorithm neglects the combined time factor, which results in poor recommendation quality. To solve this problem, a personalized recommendation algorithm integrating roulette walk and the combined time effect was proposed. Based on the user-item bipartite graph, the algorithm introduced an attenuation function to quantify the combined time factor as the association probabilities of nodes; then a roulette selection model was utilized to select the next target node according to these association probabilities; finally, the top-N recommendation for each user was provided. The experimental results show that, compared with the conventional PersonalRank random-walk algorithm, the improved algorithm performs better in terms of precision, recall, and coverage.
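The attenuation-weighted roulette step can be sketched as follows. The exponential decay form, the half-life, and the item names are illustrative assumptions, not the paper's exact quantification.

```python
import math
import random

def decay_weight(t_now, t_interact, half_life=30.0):
    # Exponential attenuation: older interactions get smaller edge weight.
    return math.exp(-math.log(2.0) * (t_now - t_interact) / half_life)

def roulette_pick(neighbors, rng):
    """neighbors: list of (node, weight) pairs; spin the wheel once,
    choosing each neighbor with probability proportional to its weight."""
    total = sum(w for _, w in neighbors)
    r = rng.random() * total
    acc = 0.0
    for node, w in neighbors:
        acc += w
        if r <= acc:
            return node
    return neighbors[-1][0]   # guard against floating-point round-off

rng = random.Random(42)
neighbors = [('itemA', decay_weight(100, 95)),   # recent interaction
             ('itemB', decay_weight(100, 40))]   # stale interaction
picks = [roulette_pick(neighbors, rng) for _ in range(1000)]
```

Recently touched nodes dominate the walk, which is precisely how the combined time factor biases the random walk toward fresh preferences.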
With the increasing application of satellite networks in emergency communication and the continuous growth of satellite terminal service types, traffic may experience instantaneous augmentation with significant burstiness, and the data flow on the terminal also presents self-similarity. A method was proposed to generate self-similar satellite terminal traffic by superposing ON/OFF sources whose time intervals follow heavy-tailed distributions. The effects of self-similar traffic input on packet loss rate, delay, and delay jitter were discussed, as well as the requirements on effective bandwidth. The relationship between the packet loss rate at the network terminal, delay, delay jitter, and system cache was obtained by simulation, based on which a method was put forward to reduce delay and packet loss rate, providing theoretical support for efficient information transmission under the condition of limited bandwidth and system cache.
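The superposed heavy-tailed ON/OFF construction can be sketched as follows. Pareto shape, rate, and horizon values are illustrative assumptions; the paper's generator parameters are not specified here.

```python
import random

def onoff_source(n_cycles, alpha=1.5, rate=1.0, seed=7):
    # One ON/OFF source with Pareto-distributed (shape alpha in (1,2), i.e.
    # heavy-tailed, infinite-variance) ON and OFF durations. Superposing
    # many such sources yields self-similar aggregate traffic.
    rng = random.Random(seed)
    timeline = []  # list of (duration, rate) segments
    for _ in range(n_cycles):
        timeline.append((rng.paretovariate(alpha), rate))   # ON burst
        timeline.append((rng.paretovariate(alpha), 0.0))    # OFF silence
    return timeline

def aggregate(sources, bin_width=1.0, horizon=200.0):
    # Superpose the sources into per-bin traffic volumes.
    n_bins = int(horizon / bin_width)
    bins = [0.0] * n_bins
    for tl in sources:
        t = 0.0
        for dur, r in tl:
            b0, b1 = t, min(t + dur, horizon)
            i = int(b0 / bin_width)
            while i < n_bins and i * bin_width < b1:
                lo = max(b0, i * bin_width)
                hi = min(b1, (i + 1) * bin_width)
                bins[i] += r * (hi - lo)   # rate times overlap with bin i
                i += 1
            t += dur
            if t >= horizon:
                break
    return bins

traffic = aggregate([onoff_source(200, seed=s) for s in range(20)])
```

Feeding `traffic` into a finite-buffer queue model is then the natural next step for studying the loss/delay/cache relationships the abstract describes.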
Concerning shopping information Web pages constructed from templates, and the large amount of Web information and complex Web structure, this paper studied how to extract shopping information from template-based Web pages without using complex learning rules. The paper defined the Web page template and the extraction template of the Web page, designed a template language used to construct templates, and gave a template-based extraction model. The experimental results on 450 standard Web pages show that the recall rate of the proposed method is 12% higher than that of the Extraction problem Algorithm (EXALG); the results on 250 standard Web pages show that the recall rate of this method is 7.4% higher than that of the Visual information and Tag structure based wrapper generator (ViNTs) method and 0.2% higher than that of the Augmenting automatic information extraction with visual perceptions (ViPER) method, and that the accuracy rate of this method is 5.2% higher than that of ViNTs and 0.2% higher than that of ViPER. The recall and accuracy rates of the extraction method based on rapidly constructed templates are thus greatly improved, which in turn improves the accuracy of Web page analysis and the recall of information in shopping information retrieval and shopping comparison systems.
Seizure detection is important for the localization and classification of epileptic seizures. To address the problems of large data volume and high-dimensional feature space in ElectroEncephaloGram (EEG) signals for quick and accurate seizure detection, a method based on the max-Relevance and Min-Redundancy (mRMR) criterion and Extreme Learning Machine (ELM) was proposed. Time-frequency measures obtained by the Short-Time Fourier Transform (STFT) were extracted as features, and a subset of the large feature set was selected based on the mRMR criterion. The states were classified using ELM, Support Vector Machine (SVM), and Back Propagation (BP) algorithms. The results show that ELM outperforms the SVM and BP algorithms in terms of computation time and classification accuracy. The classification accuracy for interictal durations and seizures reaches more than 98%, and the computation time is only 0.8 s. This approach can accurately detect epileptic seizures in real time.
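The greedy mRMR selection can be sketched for discrete features. This is a minimal illustration of the criterion (relevance minus mean redundancy, both measured by mutual information); the feature values and the tiny dataset are hypothetical, and real EEG features would first need discretization.

```python
from collections import Counter
import math

def mutual_info(x, y):
    # Discrete mutual information I(X;Y) in nats, estimated from counts.
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance
    I(f; y) minus its mean redundancy with the already-selected features.
    features: dict name -> list of discrete values."""
    selected, remaining = [], dict(features)
    while len(selected) < k and remaining:
        def score(name):
            rel = mutual_info(remaining[name], labels)
            red = (sum(mutual_info(remaining[name], features[s])
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

# Tiny hypothetical example: f1 is informative, f3 duplicates f1, f2 is noise.
y  = [0, 0, 1, 1, 0, 1, 0, 1]
f1 = [0, 0, 1, 1, 0, 1, 1, 1]
f2 = [1, 0, 0, 1, 1, 0, 1, 0]
f3 = [0, 0, 1, 1, 0, 1, 1, 1]
chosen = mrmr({'f1': f1, 'f2': f2, 'f3': f3}, y, 2)
```

The redundancy term is what keeps the duplicate `f3` out of the selection even though it is individually as relevant as `f1`.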